Expliciting semantic relations between ontologies in large ontology repositories
Recommended from our members
A platform for semantic web studies
The Semantic Web can be seen as a large, heterogeneous network of ontologies and semantic documents. Characterizing these ontologies, the way they relate and the way they are organized can help in better understanding how knowledge is produced and published online. It also provides new ways to explore and exploit this large collection of ontologies. In this paper, we present the foundation of a research platform for characterizing the Semantic Web, relying on the collection of ontologies and the functionalities provided by the Watson Semantic Web search engine. More specifically, we focus on formalizing and monitoring relationships between ontologies online, considering a variety of different relations (similarity, versioning, agreement, modularity) and how they can help us obtain meaningful overviews of the current state of the Semantic Web.
DOOR: towards a formalization of ontology relations
In this paper, we describe our ongoing effort in describing and formalizing semantic relations that link ontologies with each other on the Semantic Web, in order to create an ontology, DOOR, to represent, manipulate and reason upon these relations. DOOR is a Descriptive Ontology of Ontology Relations, which intends to define relations such as inclusion, versioning, similarity and agreement using ontological primitives as well as rules. Here, we provide a detailed description of the methodology used to design the DOOR ontology, as well as an overview of its content. We also describe how DOOR is used in a complete framework (called KANNEL) for detecting and managing semantic relations between ontologies in large ontology repositories. Applied in the context of a large collection of automatically crawled ontologies, DOOR and KANNEL provide a starting point for analyzing the underlying structure of the network of ontologies that is the Semantic Web.
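The flavour of relations DOOR formalizes can be sketched in a few lines. This is an illustrative toy model only, not DOOR's actual OWL axiomatization: it treats an ontology as a set of axiom triples, and the `includes`/`prior_version` functions and the overlap threshold are hypothetical stand-ins for the inclusion and versioning relations the paper defines with ontological primitives and rules.

```python
# Toy model: an ontology as a set of (subject, relation, object) axioms.

def includes(o1, o2):
    """Inclusion sketch: o1 includes o2 if every axiom of o2 appears in o1."""
    return o2 <= o1

def prior_version(o_old, o_new, overlap=0.5):
    """Crude versioning heuristic (hypothetical threshold): the newer
    ontology keeps most of the older one's axioms but is not identical."""
    shared = len(o_old & o_new)
    return shared / len(o_old) >= overlap and o_new != o_old

camera_v1 = {("Camera", "subClassOf", "Device"),
             ("Lens", "partOf", "Camera")}
camera_v2 = camera_v1 | {("SLR", "subClassOf", "Camera")}

assert includes(camera_v2, camera_v1)        # v2 extends v1
assert not includes(camera_v1, camera_v2)
assert prior_version(camera_v1, camera_v2)   # v1 looks like a prior version
```

A system like KANNEL would of course detect such relations over full OWL ontologies and combine them with rules, rather than over bare sets of triples.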
Detecting different versions of ontologies in large ontology repositories
SPARQL Query Recommendations by Example
In this demo paper, a SPARQL Query Recommendation Tool (called SQUIRE) based on query reformulation is presented. Based on three steps, Generalization, Specialization and Evaluation, SQUIRE implements the logic of reformulating a SPARQL query that is satisfiable w.r.t. a source RDF dataset into others that are satisfiable w.r.t. a target RDF dataset. In contrast with existing approaches, SQUIRE aims at recommending queries whose reformulations: i) reflect as much as possible the same intended meaning, structure, type of results and result size as the original query and ii) do not require a mapping between the two datasets. Based on a set of criteria to measure the similarity between the initial query and the recommended ones, SQUIRE demonstrates the feasibility of the underlying query reformulation process, ranks the recommended queries appropriately, and offers valuable support for query recommendation over an unknown and unmapped target RDF dataset, not only assisting the user in learning the data model and content of an RDF dataset, but also supporting its use without requiring the user to have intrinsic knowledge of the data.
SPARQL Query Recommendation by Example: Assessing the Impact of Structural Analysis on Star-Shaped Queries
One of the existing query recommendation strategies for unknown datasets is "by example", i.e. based on a query that the user already knows how to formulate on another dataset within a similar domain. In this paper we measure what contribution a structural analysis of the query and the datasets can bring to a recommendation strategy, to go alongside approaches that provide a semantic analysis. Here we concentrate on the case of star-shaped SPARQL queries over RDF datasets.
The illustrated strategy performs a least general generalization on the given query, computes the specializations of it that are satisfiable by the target dataset, and organizes them into a graph. It then visits the graph to recommend first the reformulated queries that reflect the original query as closely as possible. This approach does not rely upon a semantic mapping between the two datasets. An implementation as part of the SQUIRE query recommendation library is discussed.
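The generalize-specialize-rank loop can be sketched on a single star-shaped triple pattern. This is a heavily simplified, join-free illustration under assumed names (`generalize`, `specialize`, `rank`, the toy film triples), not SQUIRE's actual algorithm or API: real queries have several patterns around the centre variable, and candidates are organized into a graph rather than a flat ranked list.

```python
# Triple patterns are (s, p, o) tuples; "?x" marks a variable.

def generalize(pattern):
    """Generalization: keep the star's centre variable, abstract the rest."""
    s, _, _ = pattern
    return (s, "?p", "?o")

def specialize(gen, dataset):
    """Specialization: instantiate ?p/?o only with pairs actually present
    in the target dataset, so every candidate is satisfiable there."""
    s, _, _ = gen
    return [(s, p, o) for _, p, o in dataset]

def rank(candidates, original):
    """Evaluation: prefer candidates sharing the most terms with the original."""
    return sorted(candidates,
                  key=lambda c: sum(a == b for a, b in zip(c, original)),
                  reverse=True)

original = ("?film", "director", "Spielberg")     # satisfiable on the source
target = [("Jaws", "directedBy", "Spielberg"),    # tiny target dataset
          ("Jaws", "releaseYear", "1975")]

best = rank(specialize(generalize(original), target), original)[0]
# best == ("?film", "directedBy", "Spielberg")
```

The ranking step is where the structural criteria discussed in the paper would plug in; here a bare term-overlap count stands in for them.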
DeepOnto: A Python Package for Ontology Engineering with Deep Learning
Applying deep learning techniques, particularly language models (LMs), in ontology engineering has attracted widespread attention. However, deep learning frameworks like PyTorch and TensorFlow are predominantly developed for Python programming, while widely-used ontology APIs, such as the OWL API and Jena, are primarily Java-based. To facilitate seamless integration of these frameworks and APIs, we present DeepOnto, a Python package designed for ontology engineering. The package encompasses a core ontology processing module founded on the widely-recognised and reliable OWL API, encapsulating its fundamental features in a more "Pythonic" manner and extending its capabilities to include other essential components such as reasoning, verbalisation, normalisation, projection, and more. Building on this module, DeepOnto offers a suite of tools, resources, and algorithms that support various ontology engineering tasks, such as ontology alignment and completion, by harnessing deep learning methodologies, primarily pre-trained LMs. In this paper, we also demonstrate the practical utility of DeepOnto through two use cases: Digital Health Coaching at Samsung Research UK and the Bio-ML track of the Ontology Alignment Evaluation Initiative (OAEI).
Comment: under review at the Semantic Web Journal
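To make the ontology-alignment task concrete, here is a minimal stand-in sketch, not DeepOnto's API: token overlap between concept labels substitutes for the pre-trained LM embeddings DeepOnto actually uses, and the function names, thresholds, and medical labels are all invented for illustration.

```python
def token_sim(a, b):
    """Jaccard similarity over lower-cased label tokens (embedding stand-in)."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / len(ta | tb)

def align(src_labels, tgt_labels, threshold=0.5):
    """Greedy matching: pair each source concept with its best-scoring target."""
    mappings = []
    for s in src_labels:
        best = max(tgt_labels, key=lambda t: token_sim(s, t))
        if token_sim(s, best) >= threshold:
            mappings.append((s, best))
    return mappings

src = ["heart disease", "lung cancer"]
tgt = ["cardiac disease", "cancer of lung", "diabetes"]
print(align(src, tgt))  # → [('lung cancer', 'cancer of lung')]
```

Note that "heart disease" fails to match "cardiac disease" here because the labels share only one token, which is precisely the kind of lexical gap that motivates replacing token overlap with LM-based similarity.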
Crowdsourcing Linked Data on listening experiences through reuse and enhancement of library data
Research has approached the practice of musical reception in a multitude of ways, such as the analysis of professional critique, sales figures and psychological processes activated by the act of listening. Studies in the Humanities, on the other hand, have been hindered by the lack of structured evidence of actual experiences of listening as reported by the listeners themselves, a concern voiced since the early Web era. It was however assumed that such evidence existed, albeit in pure textual form, but could not be leveraged until it was digitised and aggregated. The Listening Experience Database (LED) responds to this research need by providing a centralised hub for evidence of listening in the literature. Not only does LED support search and reuse across nearly 10,000 records, but it also provides machine-readable structured data of the knowledge around the contexts of listening. To take advantage of the mass of formal knowledge that already exists on the Web concerning these contexts, the entire framework adopts Linked Data principles and technologies. This also allows LED to directly reuse open data from the British Library for the source documentation that is already published. Reused data are re-published as open data with enhancements obtained by expanding over the model of the original data, such as the partitioning of published books and collections into individual stand-alone documents. The database was populated through crowdsourcing and seamlessly incorporates data reuse from the very early data entry phases. As the sources of the evidence often contain vague, fragmentary, or uncertain information, facilities were put in place to generate structured data out of such fuzziness.
Alongside elaborating on these functionalities, this article provides insights into the most recent features of the latest instalment of the dataset and portal, such as the interlinking with the MusicBrainz database, the relaxation of geographical input constraints through text mining, and the plotting of key locations in an interactive geographical browser.
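One way to picture "structured data out of fuzziness" is to represent an uncertain date as an interval rather than a point. The sketch below is purely hypothetical, not LED's data model: the `FuzzyDate` class and its bounds are invented to illustrate how a vague report like "sometime in the early 1850s" can still yield queryable structure.

```python
from dataclasses import dataclass

@dataclass
class FuzzyDate:
    """An uncertain date captured as an earliest/latest year interval."""
    earliest: int
    latest: int

    def contains(self, year: int) -> bool:
        """True if the given year is compatible with the vague evidence."""
        return self.earliest <= year <= self.latest

# "sometime in the early 1850s" becomes a structured, queryable interval
reported = FuzzyDate(1850, 1854)
assert reported.contains(1852)
assert not reported.contains(1860)
```

In a Linked Data setting the same idea would be expressed with interval properties on the listening-experience resource rather than a Python class.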
Toward a Symbolic AI Approach to the WHO/ACSM Physical Activity Sedentary Behavior Guideline
The World Health Organization and the American College of Sports Medicine have released guidelines on physical activity and sedentary behavior, as part of an effort to reduce inactivity worldwide. However, to date, there is no computational model that can facilitate the integration of these recommendations into health solutions (e.g., digital coaches). In this paper, we present an operational and machine-readable model that represents and is able to reason about these guidelines. To this end, we adopted a symbolic AI approach that combines two paradigms of research in knowledge representation and reasoning: ontology and rules. Thus, we first present HeLiFit, a domain ontology implemented in OWL, which models the main entities that characterize the definition of physical activity, as defined per guidance. Then, we describe HeLiFit-Rule, a set of rules implemented in the RDFox Rule language, which can be used to represent and reason with these recommendations in concrete real-world applications. Furthermore, to ensure a high level of syntactic/semantic interoperability across different systems, our framework is also compliant with the FHIR standard. Through motivating scenarios that highlight the need for such an implementation, we finally present an evaluation of our model, with results that are both encouraging in terms of the value of our solution and a basis for future work.
Towards a Symbolic AI Approach to the WHO/ACSM Physical Activity & Sedentary Behaviour Guidelines
The World Health Organization and the American College of Sports Medicine have released guidelines on physical activity and sedentary behaviour, as part of an effort to reduce inactivity worldwide. However, to date, there is no computational model that can facilitate the integration of these recommendations into health solutions (e.g., Digital Coaches). In this paper, we present an operational and machine-readable model that represents and is able to reason about these guidelines. To this end, we adopted a Symbolic AI approach that combines two paradigms of research in Knowledge Representation and Reasoning: Ontology and Rules. Thus, we first present HeLiFit, a domain ontology implemented in OWL, which models the main entities that characterize the definition of physical activity, as defined per guidance. Then, we describe HeLiFit-Rule, a set of rules implemented in the RDFox Rule language, which can be used to represent and reason with these recommendations in concrete real-world applications. Furthermore, to ensure a high level of syntactic/semantic interoperability across different systems, our framework is also compliant with the FHIR standard. Through motivating scenarios that highlight the need for such an implementation, we finally present an evaluation of our model, with results that are both encouraging in terms of the value of our solution and a basis for future work.
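The kind of rule HeLiFit-Rule encodes can be illustrated in plain Python; the papers use OWL plus RDFox rules, so this is only a sketch of the inference involved, with invented function and record names. The one grounded fact is the WHO aerobic recommendation for adults: at least 150 minutes of moderate activity per week, or 75 minutes of vigorous activity, or an equivalent combination in which vigorous minutes count double.

```python
def meets_aerobic_guideline(moderate_min, vigorous_min):
    """WHO adult aerobic rule: moderate + 2 x vigorous >= 150 min/week."""
    return moderate_min + 2 * vigorous_min >= 150

# A hypothetical week of logged sessions: (day, intensity, minutes).
week = [("Mon", "moderate", 30), ("Wed", "vigorous", 25),
        ("Fri", "moderate", 40), ("Sun", "vigorous", 20)]

moderate = sum(m for _, kind, m in week if kind == "moderate")  # 70
vigorous = sum(m for _, kind, m in week if kind == "vigorous")  # 45

assert meets_aerobic_guideline(moderate, vigorous)  # 70 + 2*45 = 160 >= 150
```

In the actual framework this check would be a declarative rule over FHIR-compliant activity resources, letting a digital coach derive guideline compliance rather than hard-coding it.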